Search results

1 – 5 of 5
Article
Publication date: 3 July 2020

Mohammad Khalid Pandit, Roohie Naaz Mir and Mohammad Ahsan Chishti

The intelligence in the Internet of Things (IoT) can be embedded by analyzing the huge volumes of data generated by it in an ultralow latency environment. The computational…

Abstract

Purpose

Intelligence can be embedded in the Internet of Things (IoT) by analyzing the huge volumes of data it generates in an ultralow-latency environment. The computational latency incurred by a cloud-only solution can be significantly reduced by the fog computing layer, which offers a computing infrastructure that minimizes latency in service delivery and execution. For this purpose, a task scheduling policy based on reinforcement learning (RL) is developed that achieves optimal resource utilization, minimizes task execution time and significantly reduces communication costs during distributed execution.

Design/methodology/approach

To realize this, the authors proposed a two-level neural network (NN)-based task scheduling system, in which the first-level NN (a feed-forward or convolutional neural network [FFNN/CNN]) determines whether a data stream can be analyzed (executed) in the resource-constrained environment (edge/fog) or should be forwarded directly to the cloud. The second-level NN (the RL module) schedules the tasks that the level-1 NN sends to the fog layer among the available fog devices. This real-time task assignment policy minimizes both the total computational latency (makespan) and the communication costs.
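The second-level RL scheduler can be illustrated with a minimal, hypothetical sketch: a tabular Q-learning agent assigns incoming tasks to fog devices and is rewarded for earlier completion times. The device speeds, task sizes and the coarse state encoding (index of the least-loaded device) are invented for illustration and are not taken from the paper.

```python
import random

random.seed(0)
N_DEVICES = 3
SPEEDS = [1.0, 2.0, 4.0]          # hypothetical processing speed of each fog device
N_TASKS = 200
EPISODES = 300
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1  # learning rate, discount, exploration rate

# Q[state][action]: state is a coarse summary (index of least-loaded device),
# action is the device a task is assigned to.
Q = [[0.0] * N_DEVICES for _ in range(N_DEVICES)]

def run_episode(learn=True):
    loads = [0.0] * N_DEVICES      # current finish time of each device's queue
    for _ in range(N_TASKS):
        size = random.uniform(1, 5)
        state = loads.index(min(loads))
        if learn and random.random() < EPS:
            action = random.randrange(N_DEVICES)   # explore
        else:
            action = max(range(N_DEVICES), key=lambda a: Q[state][a])
        finish = loads[action] + size / SPEEDS[action]
        reward = -finish                            # penalise late completion
        loads[action] = finish
        next_state = loads.index(min(loads))
        if learn:
            Q[state][action] += ALPHA * (
                reward + GAMMA * max(Q[next_state]) - Q[state][action])
    return max(loads)                               # makespan of the episode

for _ in range(EPISODES):
    run_episode()

print("learned makespan:", round(run_episode(learn=False), 2))
```

The paper's greedy baseline would evaluate every device for every task; the learned policy instead reads a single Q-table row per assignment, which is what makes it feasible in real time.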

Findings

Experimental results indicated that the RL technique works better than the computationally infeasible greedy approach for task scheduling, and that combining RL with a task-clustering algorithm reduces communication costs significantly.

Originality/value

The proposed algorithm fundamentally solves the problem of task scheduling in real-time fog-based IoT, achieving the best resource utilization, minimum makespan and minimum communication cost between tasks.

Details

International Journal of Intelligent Computing and Cybernetics, vol. 13 no. 3
Type: Research Article
ISSN: 1756-378X

Keywords

Article
Publication date: 18 August 2021

Jameel Ahamed, Roohie Naaz Mir and Mohammad Ahsan Chishti

A huge amount of diverse data is generated in the Internet of Things (IoT) because of heterogeneous devices like sensors, actuators, gateways and many more. Due to assorted nature…

Abstract

Purpose

A huge amount of diverse data is generated in the Internet of Things (IoT) by heterogeneous devices such as sensors, actuators and gateways. Owing to the assorted nature of these devices, interoperability remains a major challenge for IoT system developers. The purpose of this study is to use mapping techniques for converting a relational database (RDB) to the Resource Description Framework (RDF) for the development of an ontology. Ontology helps achieve semantic interoperability in IoT application areas, resulting in a shared/common understanding of the heterogeneous data generated by the diverse devices used in the health-care domain.

Design/methodology/approach

To overcome the issue of semantic interoperability in the health-care domain, the authors developed an ontology for patients with cardiovascular diseases. With this approach, patients located anywhere in the world can be diagnosed by heart experts located elsewhere. The mechanism maps heterogeneous data into the RDF format in an integrated and interoperable manner and is used to integrate the diverse data of heart patients needed for the diagnosis of cardiovascular diseases. The approach is also applicable in other fields where the IoT is widely used.
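The core RDB-to-RDF idea can be sketched in a few lines: each row's primary key becomes a subject URI and every other column becomes a predicate/literal pair. This is only an illustrative sketch; the table name, columns and base URI below are invented for the example, not taken from the paper.

```python
# Hypothetical base URI and patient table for illustration only.
BASE = "http://example.org/clinic/"

patients = [  # rows of an invented RDB table `patient`
    {"id": 1, "name": "A. Rahman", "heart_rate": 88, "diagnosis": "arrhythmia"},
    {"id": 2, "name": "B. Lone",   "heart_rate": 72, "diagnosis": "normal"},
]

def row_to_triples(table, row, key="id"):
    """Concept-based mapping: the primary key becomes the subject URI,
    every other column becomes a predicate with a literal object."""
    subject = f"<{BASE}{table}/{row[key]}>"
    triples = []
    for col, value in row.items():
        if col == key:
            continue
        predicate = f"<{BASE}schema/{col}>"
        triples.append(f'{subject} {predicate} "{value}" .')   # N-Triples style
    return triples

for row in patients:
    for t in row_to_triples("patient", row):
        print(t)
```

Once the records are expressed as triples like these, they can be loaded into any RDF store and queried with SPARQL regardless of which device or schema originally produced them, which is the interoperability gain the paper targets.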

Findings

Experimental results showed that RDF works better than the relational database for semantic interoperability in the IoT. The concept-based approach outperforms the key-based approach and reduces both computation time and data storage.

Originality/value

The proposed approach helps overcome the limitations of relational databases with respect to standardization, expressivity and provenance, and it supports SPARQL. It therefore helps overcome heterogeneity, enabling semantic interoperability in the IoT.

Details

International Journal of Pervasive Computing and Communications, vol. 17 no. 4
Type: Research Article
ISSN: 1742-7371

Keywords

Article
Publication date: 5 August 2019

Mohammad Irfan Bala and Mohammad Ahsan Chishti

Fog computing is a new field of research and has emerged as a complement to the cloud which can mitigate the problems inherent to the cloud computing model such as unreliable…

Abstract

Purpose

Fog computing is a new field of research that has emerged as a complement to the cloud, mitigating problems inherent to the cloud computing model such as unreliable latency, bandwidth constraints, security and mobility. This paper aims to provide a detailed survey covering the current state of the art in fog computing.

Design/methodology/approach

The cloud was developed for IT, not for the Internet of Things (IoT); as a result, it is unable to meet the computing, storage, control and networking demands of IoT applications. Fog is a companion to the cloud that aims to extend cloud capabilities to the edge of the network.

Findings

The lack of survey papers in the area of fog computing was an important motivation for writing this paper. The paper highlights the capabilities of fog computing and where it fits between the IoT and the cloud. It also presents the architecture of the fog computing model and its characteristics. Finally, the challenges that must be overcome for fog computing to realize its full potential are discussed in detail.

Originality/value

This paper presents the current state of the art in fog computing; the scarcity of such surveys adds to its value. It also covers the challenges and opportunities in fog computing and various possible solutions to overcome those challenges.

Details

International Journal of Pervasive Computing and Communications, vol. 15 no. 2
Type: Research Article
ISSN: 1742-7371

Keywords

Article
Publication date: 15 September 2020

Ab Rouf Khan and Mohammad Ahsan Chishti

The purpose of this study is to exploit the lowest common ancestor technique in an m-ary data aggregation tree in the fog computing-enhanced IoT to assist in contact tracing in…


Abstract

Purpose

The purpose of this study is to exploit the lowest-common-ancestor technique in an m-ary data aggregation tree in fog computing-enhanced IoT to assist in contact tracing for COVID-19. Data aggregation is one of the promising characteristics of the Internet of Things (IoT) that can be used to help the world address the COVID-19 pandemic. As the number of patients infected by the disease is already huge, the data related to the different attributes of patients, such as thermal image records and previous health records, is going to be gigantic. The authors used data aggregation to efficiently aggregate and analyse the data sensed from patients. Among the various inferences drawn from the aggregated data, one of the most important is contact tracing, which deals with finding a person or group of persons who infected, or were infected by, the disease.

Design/methodology/approach

The authors propose to exploit the lowest-common-ancestor technique in an m-ary data aggregation tree in fog computing-enhanced IoT to help health-care experts with contact tracing in a particular region or community. The authors argue that, in the current COVID-19 pandemic, finding the person or group of persons who has/have infected a group of people is of extreme importance. Identifying the individuals who have been infected or are infecting others can keep the pandemic from worsening by stopping community transfer. In a community where the outbreak has spiked, samples from either all persons or only those showing symptoms are collected and stored in an m-ary tree-based structure sorted over time.
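The lowest-common-ancestor idea behind the scheme can be sketched as follows: people are nodes of an m-ary contact tree (each node's parent is the person they caught the infection from), and the LCA of two patients is their likely common source. The tree and names below are invented for illustration and do not come from the paper's data.

```python
# child -> parent (infection source); the root of the outbreak has parent None.
parent = {
    "root": None,
    "P1": "root", "P2": "root", "P3": "root",   # m-ary: up to m children per node
    "P4": "P1", "P5": "P1",
    "P6": "P4", "P7": "P5",
}

def depth(node):
    """Number of edges from node up to the root."""
    d = 0
    while parent[node] is not None:
        node = parent[node]
        d += 1
    return d

def lca(a, b):
    """Walk the deeper node up until both are level, then climb together."""
    da, db = depth(a), depth(b)
    while da > db:
        a, da = parent[a], da - 1
    while db > da:
        b, db = parent[b], db - 1
    while a != b:
        a, b = parent[a], parent[b]
    return a

print(lca("P6", "P7"))  # → P1 (common source of both infection chains)
```

With parent pointers and depths this runs in time proportional to the tree height, so a health-care expert can query any pair of confirmed cases for their shared carrier without scanning the whole community.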

Findings

Contact tracing in COVID-19 deals with finding a person or group of persons who infected, or were infected by, the disease. The authors exploited the lowest-common-ancestor technique in an m-ary data aggregation tree in fog computing-enhanced IoT to help health-care experts with contact tracing in a particular region or community. Simulations were carried out on a randomly chosen set of individuals: the proposed Algorithm 1 is executed on the samples collected at level 0 of the simulation model, and Algorithm 2 is implemented at level 1 to aggregate and transmit the data. The results show that a carrier can be easily identified from the collected samples using the approach designed in the paper.

Practical implications

The work presented in the paper can aid health-care experts fighting the COVID-19 pandemic by reducing community transfer through the efficient contact tracing mechanism proposed in the paper.

Social implications

Fighting COVID-19 efficiently and saving humans from the pandemic has huge social implications in the current time of crisis.

Originality/value

To the best of the authors' knowledge, using the lowest-common-ancestor technique in an m-ary data aggregation tree in fog computing-enhanced IoT to contact trace individuals who infected, or were infected, during the transmission of COVID-19 is the first proposal of its kind. By creating a graph or m-ary tree based on the interactions between people in a particular community (e.g. location, friends and time), the tree can be traversed to find who infected any two persons or a group of persons, or who was infected by them, by finding their lowest common ancestor.

Details

International Journal of Pervasive Computing and Communications, vol. 18 no. 5
Type: Research Article
ISSN: 1742-7371

Keywords

Article
Publication date: 10 February 2022

Jameel Ahamed, Roohie Naaz Mir and Mohammad Ahsan Chishti

The world is shifting towards the fourth industrial revolution (Industry 4.0), symbolising the move to digital, fully automated habitats and cyber-physical systems. Industry 4.0…

Abstract

Purpose

The world is shifting towards the fourth industrial revolution (Industry 4.0), symbolising the move to digital, fully automated habitats and cyber-physical systems. Industry 4.0 spans innovative ideas and techniques in almost all sectors, including smart health care, which recommends technologies and mechanisms for the early prediction of life-threatening diseases. Cardiovascular disease (CVD), which includes stroke, is one of the world's leading causes of sickness and death. According to the American Heart Association, CVDs are a leading cause of death globally, and COVID-19 is also believed to have affected cardiovascular health, increasing the number of patients as a result. Early detection of such diseases is one route to a lower mortality rate. In this work, early prediction models for CVDs are developed with the help of machine learning (ML), a form of artificial intelligence that allows computers to learn and improve on their own without being explicitly programmed.

Design/methodology/approach

The proposed CVD prediction models are implemented with the help of ML techniques, namely, decision tree, random forest, k-nearest neighbours, support vector machine, logistic regression, AdaBoost and gradient boosting. To mitigate over-fitting and under-fitting, hyperparameter optimisation techniques are used to develop efficient disease prediction models. Furthermore, an ensemble technique using soft voting is used to gain more insight into the data set and to obtain more accurate prediction models.
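The soft-voting ensemble step can be sketched as follows: each base model outputs class probabilities, the ensemble averages them (optionally with weights) and predicts the class with the highest average. The three probability rows below are stand-ins for illustration, not outputs of the paper's trained classifiers.

```python
def soft_vote(prob_lists, weights=None):
    """Average per-class probabilities across models; return (predicted class,
    averaged probabilities). Uniform weights unless given."""
    n_models = len(prob_lists)
    weights = weights or [1.0 / n_models] * n_models
    n_classes = len(prob_lists[0])
    avg = [sum(w * p[c] for w, p in zip(weights, prob_lists))
           for c in range(n_classes)]
    return max(range(n_classes), key=lambda c: avg[c]), avg

# Hypothetical P(no CVD), P(CVD) for one patient from three base models:
model_probs = [
    [0.40, 0.60],   # e.g. decision tree
    [0.55, 0.45],   # e.g. k-nearest neighbours
    [0.30, 0.70],   # e.g. logistic regression
]

label, avg = soft_vote(model_probs)
print(label, [round(a, 3) for a in avg])  # → 1 [0.417, 0.583]
```

Note that a hard (majority) vote over the argmax of each row would give the same answer here, but soft voting also preserves how confident the ensemble is, which matters when ranking patients by risk.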

Findings

The models were developed to help health-care providers with the early diagnosis and prediction of heart disease, reducing patients' risk of developing severe disease. The heart disease risk evaluation model is built in a Jupyter Notebook web application, and its performance is measured using unbiased indicators such as true positive rate, true negative rate, accuracy, precision, misclassification rate, area under the ROC curve and a cross-validation approach. The results revealed that the ensemble heart disease model outperforms the other proposed and implemented models.

Originality/value

The proposed and developed CVD prediction models aim at predicting CVDs at an early stage, so that prevention and precautionary measures can be taken very early in the disease, in line with the predictive maintenance paradigm recommended in Industry 4.0. Prediction models are developed using the algorithms' default values, hyperparameter optimisation and ensemble techniques.

Details

Industrial Robot: the international journal of robotics research and application, vol. 49 no. 3
Type: Research Article
ISSN: 0143-991X

Keywords
